
Civil Rights & Constitutional Law


Major UK retailers urged to quit 'authoritarian' police facial recognition strategy

The Guardian > Business

Some of Britain's biggest retailers, including Tesco, John Lewis and Sainsbury's, have been urged to pull out of a new policing strategy amid warnings it risks wrongly criminalising people of colour, women and LGBTQ people. A coalition of 14 human rights groups has written to the main retailers – also including Marks & Spencer, the Co-op, Next, Boots and Primark – saying that their participation in a new government-backed scheme that relies heavily on facial recognition technology to combat shoplifting will "amplify existing inequalities in the criminal justice system". The letter, from Liberty, Amnesty International and Big Brother Watch, among others, questions the unchecked rollout of a technology that has provoked fierce criticism over its impact on privacy and human rights at a time when the European Union is seeking to ban the technology in public spaces through proposed legislation. "Facial recognition technology notoriously misidentifies people of colour, women and LGBTQ people, meaning that already marginalised groups are more likely to be subject to an invasive stop by police, or at increased risk of physical surveillance, monitoring and harassment by workers in your stores," the letter states. Its authors also express dismay that the move will "reverse steps" that big retailers introduced during the Black Lives Matter movement, including high-profile commitments to be champions of diversity, equality and inclusion. Meanwhile, concerns over the broadening use of facial recognition technology have further intensified after the emergence of details of a police watchlist used to justify the contentious decision to use biometric surveillance at July's Formula One British Grand Prix at Silverstone.


ChatGPT Crowns Clarence Thomas As Champion Of Gay Rights In Feedback Loop Of Stupid

Above the Law

Everyone is chattering about ChatGPT. Can it pass the bar exam? No, though it performs well on some sections, which should force a serious reevaluation of the test's ultimate value to the profession, but instead will convince bar examiners to introduce cavity searches. And, as The Onion points out, ChatGPT was as depressed to take the test as the rest of us.


ChatGPT Isn't the Only Way to Use AI in Education

WIRED

Soon after ChatGPT broke the internet, it sparked an all-too-familiar question for new technologies: What can it do for education? Many feared it would worsen plagiarism and further damage an already decaying humanism in the academy, while others lauded its potential to spark creativity and handle mundane educational tasks. Of course, ChatGPT is just one of many advances in artificial intelligence that have the capacity to alter pedagogical practices. The allure of AI-powered tools to help individuals maximize their understanding of academic subjects (or more effectively prepare for exams) by offering them the right content, in the right way, at the right time for them has spurred new investments from governments and private philanthropies. There is reason to be excited about such tools, especially if they can mitigate barriers to a higher quality of life, such as reading proficiency disparities by race, which the NAACP has highlighted as a civil rights issue.


Artificial Intelligence Takes Center Stage at EEOC

#artificialintelligence

The U.S. Equal Employment Opportunity Commission (EEOC) recently released a draft of its new Strategic Enforcement Plan (SEP), outlining its priorities in tackling workplace discrimination over the next four years. The playbook, published in the Federal Register in January, indicates that the agency will be on the lookout for discrimination caused by artificial intelligence (AI) tools. "The EEOC is signaling in its draft SEP that it intends to enforce federal nondiscrimination laws equally, whether the discrimination takes place through traditional recruiting or through the use of modern and automated tools," said Andrew M. Gordon, an attorney with the law firm Hinshaw & Culbertson LLP in Fort Lauderdale, Fla. Over the last decade, AI use in the workplace has skyrocketed. Nearly 1 in 4 organizations uses AI to support HR-related activities, according to a 2022 survey by the Society for Human Resource Management (SHRM).


Artificially intelligent robot perpetuates racist and sexist prejudice

New Scientist - News

A robot running an artificial intelligence (AI) model carries out actions that perpetuate racist and sexist stereotypes, highlighting the issues that exist when tech learns from data sets with inherent biases.


Why AI Needs a Social License

#artificialintelligence

If business wants to use AI at scale, adhering to the technical guidelines for responsible AI development isn't enough. It must obtain society's explicit approval to deploy the technology. Six years ago, in March 2016, Microsoft Corporation launched an experimental AI-based chatbot, TayTweets, whose Twitter handle was @TayandYou. Tay, an acronym for "thinking about you," mimicked a 19-year-old American girl online, so the digital giant could showcase the speed at which AI can learn when it interacts with human beings. Living up to its description as "AI with zero chill," Tay started off replying cheekily to Twitter users and turning photographs into memes. Some topics were off limits, though; Microsoft had trained Tay not to comment on societal issues such as Black Lives Matter. Soon enough, a group of Twitter users targeted Tay with a barrage of tweets about controversial issues such as the Holocaust and Gamergate. They goaded the chatbot into replying with racist and sexually charged responses, exploiting its repeat-after-me capability. Realizing that Tay was reacting like IBM's Watson, which started using profanity after perusing the online Urban Dictionary, Microsoft was quick to delete the first inflammatory tweets. Less than 16 hours and more than 100,000 tweets later, the digital giant shut down Tay.


On the Opportunities and Risks of Foundation Models

Bommasani, Rishi, Hudson, Drew A., Adeli, Ehsan, Altman, Russ, Arora, Simran, von Arx, Sydney, Bernstein, Michael S., Bohg, Jeannette, Bosselut, Antoine, Brunskill, Emma, Brynjolfsson, Erik, Buch, Shyamal, Card, Dallas, Castellon, Rodrigo, Chatterji, Niladri, Chen, Annie, Creel, Kathleen, Davis, Jared Quincy, Demszky, Dora, Donahue, Chris, Doumbouya, Moussa, Durmus, Esin, Ermon, Stefano, Etchemendy, John, Ethayarajh, Kawin, Fei-Fei, Li, Finn, Chelsea, Gale, Trevor, Gillespie, Lauren, Goel, Karan, Goodman, Noah, Grossman, Shelby, Guha, Neel, Hashimoto, Tatsunori, Henderson, Peter, Hewitt, John, Ho, Daniel E., Hong, Jenny, Hsu, Kyle, Huang, Jing, Icard, Thomas, Jain, Saahil, Jurafsky, Dan, Kalluri, Pratyusha, Karamcheti, Siddharth, Keeling, Geoff, Khani, Fereshte, Khattab, Omar, Koh, Pang Wei, Krass, Mark, Krishna, Ranjay, Kuditipudi, Rohith, Kumar, Ananya, Ladhak, Faisal, Lee, Mina, Lee, Tony, Leskovec, Jure, Levent, Isabelle, Li, Xiang Lisa, Li, Xuechen, Ma, Tengyu, Malik, Ali, Manning, Christopher D., Mirchandani, Suvir, Mitchell, Eric, Munyikwa, Zanele, Nair, Suraj, Narayan, Avanika, Narayanan, Deepak, Newman, Ben, Nie, Allen, Niebles, Juan Carlos, Nilforoshan, Hamed, Nyarko, Julian, Ogut, Giray, Orr, Laurel, Papadimitriou, Isabel, Park, Joon Sung, Piech, Chris, Portelance, Eva, Potts, Christopher, Raghunathan, Aditi, Reich, Rob, Ren, Hongyu, Rong, Frieda, Roohani, Yusuf, Ruiz, Camilo, Ryan, Jack, Ré, Christopher, Sadigh, Dorsa, Sagawa, Shiori, Santhanam, Keshav, Shih, Andy, Srinivasan, Krishnan, Tamkin, Alex, Taori, Rohan, Thomas, Armin W., Tramèr, Florian, Wang, Rose E., Wang, William, Wu, Bohan, Wu, Jiajun, Wu, Yuhuai, Xie, Sang Michael, Yasunaga, Michihiro, You, Jiaxuan, Zaharia, Matei, Zhang, Michael, Zhang, Tianyi, Zhang, Xikun, Zhang, Yuhui, Zheng, Lucia, Zhou, Kaitlyn, Liang, Percy

arXiv.org Artificial Intelligence

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.


The State of AI Ethics Report (Volume 5)

Gupta, Abhishek, Wright, Connor, Ganapini, Marianna Bergamaschi, Sweidan, Masa, Butalid, Renjie

arXiv.org Artificial Intelligence

This report from the Montreal AI Ethics Institute covers the most salient progress in research and reporting over the second quarter of 2021 in the field of AI ethics with a special emphasis on "Environment and AI", "Creativity and AI", and "Geopolitics and AI." The report also features an exclusive piece titled "Critical Race Quantum Computer" that applies ideas from quantum physics to explain the complexities of human characteristics and how they can and should shape our interactions with each other. The report also features special contributions on the subject of pedagogy in AI ethics, sociology and AI ethics, and organizational challenges to implementing AI ethics in practice. Given MAIEI's mission to highlight scholars from around the world working on AI ethics issues, the report also features two spotlights sharing the work of scholars operating in Singapore and Mexico helping to shape policy measures as they relate to the responsible use of technology. The report also has an extensive section covering the gamut of issues when it comes to the societal impacts of AI covering areas of bias, privacy, transparency, accountability, fairness, interpretability, disinformation, policymaking, law, regulations, and moral philosophy.


The Role of Social Movements, Coalitions, and Workers in Resisting Harmful Artificial Intelligence and Contributing to the Development of Responsible AI

von Struensee, Susan

arXiv.org Artificial Intelligence

There is mounting public concern over the influence that AI-based systems have in our society. Coalitions in all sectors are acting worldwide to resist harmful applications of AI. From indigenous people addressing the lack of reliable data, to smart city stakeholders, to students protesting the academic relationships with sex trafficker and MIT donor Jeffrey Epstein, the questionable ethics and values of those heavily investing in and profiting from AI are under global scrutiny. There are biased, wrongful, and disturbing assumptions embedded in AI algorithms that could get locked in without intervention. Our best human judgment is needed to contain AI's harmful impact. Perhaps one of the greatest contributions of AI will be to make us ultimately understand how important human wisdom truly is in life on earth.


The State of AI Ethics Report (January 2021)

Gupta, Abhishek, Royer, Alexandrine, Wright, Connor, Khan, Falaah Arif, Heath, Victoria, Galinkin, Erick, Khurana, Ryan, Ganapini, Marianna Bergamaschi, Fancy, Muriam, Sweidan, Masa, Akif, Mo, Butalid, Renjie

arXiv.org Artificial Intelligence

The 3rd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in AI Ethics since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the field's ever-changing developments. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: algorithmic injustice, discrimination, ethical AI, labor impacts, misinformation, privacy, risk and security, social media, and more. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Unique to this report is "The Abuse and Misogynoir Playbook," written by Dr. Katlyn Turner (Research Scientist, Space Enabled Research Group, MIT), Dr. Danielle Wood (Assistant Professor, Program in Media Arts and Sciences; Assistant Professor, Aeronautics and Astronautics; Lead, Space Enabled Research Group, MIT) and Dr. Catherine D'Ignazio (Assistant Professor, Urban Science and Planning; Director, Data + Feminism Lab, MIT). The piece (and accompanying infographic) is a deep dive into the historical and systematic silencing, erasure, and revision of Black women's contributions to knowledge and scholarship in the United States, and globally. Exposing and countering this Playbook has become increasingly important following the firing of AI Ethics expert Dr. Timnit Gebru (and several of her supporters) at Google. This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but also as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.